LRU Stack Processing
Author
Abstract
Stack processing, and in particular stack processing for the least recently used (LRU) replacement algorithm, may present computational problems when applied to a sequence of page references with many different pages. This paper describes a new technique for LRU stack processing that permits efficient processing of such sequences. An analysis of the algorithm and a comparison of its running times with those of the conventional stack processing algorithm are presented. Finally, we discuss a multipass implementation, which was found necessary to process trace data from a large data base system.

Introduction

Storage hierarchy evaluation is often accomplished by simulating the hierarchy under loads determined by "representative" address traces. Stack processing, as proposed by Mattson, Gecsei, Slutz, and Traiger [1], allows efficient evaluation of multilevel hierarchies for a class of replacement algorithms called stack algorithms. Of these algorithms, least recently used (LRU) is the most extensively simulated. The original LRU stack processing algorithm proposed by Mattson et al. calculates, for one page size, a histogram of stack distances, which determines the frequency of access to each level of a multilevel linear hierarchy for any set of level capacities. The method involves converting each address to a page reference and maintaining a list of pages called an LRU stack, in which the pages are ordered by most recent reference. For each reference the stack distance, the position in the stack of the currently referenced page, is obtained. To maintain the LRU stack and to obtain the stack distance for each reference, Mattson et al. proposed a concurrent search and update from the top down. The current page is placed on top of the stack, and each page in the stack is shifted down by one until the current page is encountered. That position in the stack is recorded as the stack distance.
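The top-down search and update just described can be sketched in a few lines of Python. This is an illustrative rendering, not the authors' original implementation; it assumes page identifiers are hashable, and it records a first reference (a page not yet in the stack) with an infinite distance, as the paper describes next.

```python
def conventional_stack_distances(trace):
    """Yield the LRU stack distance of each reference in `trace`.

    The stack is kept with the most recently used page at index 0.
    A reference to a page not yet in the stack yields infinity.
    """
    stack = []
    for page in trace:
        if page in stack:
            position = stack.index(page)   # 0-based position of the hit
            del stack[position]            # pages above it shift down by one
            distance = position + 1        # 1-based stack distance
        else:
            distance = float("inf")        # first reference to this page
        stack.insert(0, page)              # current page moves to the top
        yield distance
```

For the trace a, b, c, b, a this yields inf, inf, inf, 2, 3. The number of comparisons per reference grows with the stack distance, which is exactly the cost the paper sets out to reduce.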
If the current page is not found, i.e., if it has not occurred before, then downshifting proceeds to the bottom of the stack and a stack distance of infinity is recorded. The number of tests for a match with the current page is equal to the stack distance or, for a new page, to the number of distinct pages encountered so far. Thus, stack processing a trace that has a large number of distinct pages or a large average stack distance may require excessive computing time. Traiger and Slutz [2] showed that, for a little additional overhead, this method could also produce stack distances for page sizes that are successive multiples of the basic page size. However, for some data base reference traces and some program address traces the method was found not to be feasible for the page sizes of interest. In this paper we describe a new algorithm for LRU stack processing; this algorithm is much more efficient for the analysis of trace data for a single page size when the number of pages and the average stack distance are large, but separate page sizes require essentially separate calculations. Second, we present an analysis of the algorithm and a comparison of running times. Finally, we discuss a multipass implementation of the algorithm, which we have found necessary in order to efficiently process trace data from a large data base system.

New LRU stack processing algorithm

The purpose of the algorithm described here is to determine the stack distance for each reference. Rather than regarding the stack distance as the position in an LRU stack, we observe that the distance is equal to one plus the number of distinct pages that have been referenced since the current page was last referenced. If we denote the page referenced at time t by x_t and the stack distance by d_t, then we have

    d_t = 1 + C(x_{p+1}, ..., x_{t-1}),

where p = max{i < t : x_i = x_t}, i.e., p is the time of the most recent past reference to page x_t, and C(S) is the number of distinct pages in S.
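The observation above can be checked with a direct, deliberately naive computation: keep the time of each page's most recent reference, and note that the distinct pages referenced since time p are exactly those whose most recent reference falls after p. The linear scan below only illustrates the identity; the paper's contribution is obtaining this count efficiently. Names are illustrative.

```python
def distances_by_recency(trace):
    """Yield d_t = 1 + C(x_{p+1}, ..., x_{t-1}) for each reference.

    `last[y]` holds the time of the most recent reference to page y.
    Every distinct page referenced after time p has its most recent
    reference after p, so counting such pages gives C(...).
    """
    last = {}
    for t, page in enumerate(trace):
        if page in last:
            p = last[page]
            yield 1 + sum(1 for q in last.values() if q > p)
        else:
            yield float("inf")   # no previous reference: infinite distance
        last[page] = t
```

On any trace this agrees with the positional definition of stack distance; the trace a, b, c, b, a again gives inf, inf, inf, 2, 3.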
Similar resources
Modified LRU Policies for Improving Second-Level Cache Behavior
Main memory accesses continue to be a significant bottleneck for applications whose working sets do not fit in second-level caches. With the trend of greater associativity in second-level caches, implementing effective replacement algorithms might become more important than reducing conflict misses. After showing that an opportunity exists to close part of the gap between the OPT and the LRU al...
A Cache Model for Modern Processors
Modern processors use high-performance cache replacement policies that outperform traditional alternatives like least-recently used (LRU). Unfortunately, current cache models use stack distances to predict LRU or its variants, and do not capture these high-performance policies. Accurate predictions of cache performance enable many optimizations in multicore systems. For example, cache partition...
The MT Stack: Paging Algorithm and Performance in a Distributed Virtual Memory System
Advances in parallel computation are of central importance to Artificial Intelligence due to the significant amount of time and space their programs require. Functional languages have been identified as providing a clear and concise way of programming parallel machines for artificial intelligence tasks. The problems of exporting, creating, and manipulating processes have been thoroughly studied...
Processing a multifold ground penetration radar data using common-diffraction-surface stack method
Recently, non-destructive methods have become of interest to scientists in various fields. One of these methods is Ground Penetration Radar (GPR), which can provide valuable information about underground structures in an environmentally friendly and cost-effective way. To increase the signal-to-noise (S/N) ratio of the GPR data, multi-fold acquisition is performed, and the Common-Mid-Points ...
Reuse-based Analytical Models for Caches
We develop a reuse distance/stack distance based analytical modeling framework for efficient, online prediction of cache performance for a range of cache configurations and replacement policies (LRU, PLRU, RANDOM, NMRU). Such a predictive framework can be extremely useful in selecting optimal parameters in a dynamic reconfiguration environment that performs power-shifting or resource realloca...
Journal title:
Volume / Issue
Pages -
Publication date: 1975